A quasi-Newton proximal splitting method
Authors
Abstract
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise linear nature of the dual problem. The second part of the paper applies the previous result to the acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, machine learning, and classification.
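To make the setting concrete, the following sketch shows a standard forward-backward (proximal gradient) iteration for an ℓ1-regularized least-squares problem, where the proximity operator of the ℓ1 norm is elementwise soft-thresholding. The paper's contribution concerns computing such proximity operators under a scaled (non-Euclidean) norm, which this sketch does not implement; the quadratic data term, step size, and all names below are illustrative assumptions, not the authors' code.

    import numpy as np

    def soft_threshold(z, t):
        # Proximity operator of t*||.||_1: elementwise soft-thresholding.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def proximal_gradient(A, b, lam, step, n_iter=200):
        # Forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1.
        # 'step' should not exceed 1 / ||A||^2 (reciprocal of the gradient's
        # Lipschitz constant) for the iteration to converge.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                         # gradient of the smooth part
            x = soft_threshold(x - step * grad, step * lam)  # prox of the nonsmooth part
        return x

    # Illustrative usage on synthetic sparse-recovery data (assumed, not from the paper).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    b = A @ np.concatenate([rng.standard_normal(5), np.zeros(95)])
    x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)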
Similar Papers
Adaptive Fista
In this paper we propose an adaptively extrapolated proximal gradient method, which is based on the accelerated proximal gradient method (also known as FISTA); however, we locally optimize the extrapolation parameter by carrying out an exact (or inexact) line search. It turns out that in some situations, the proposed algorithm is equivalent to a class of SR1 (identity minus rank 1) proximal quas...
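As a rough illustration of this idea, the sketch below runs a FISTA-style iteration on the same ℓ1-regularized least-squares model as above and, instead of the fixed FISTA momentum sequence, picks the extrapolation parameter at each step by comparing the objective over a small grid of candidates. This grid search is only a crude stand-in for the exact or inexact line search described in the excerpt, and the function names and parameter values are assumptions.

    import numpy as np

    def fista_adaptive(A, b, lam, step, betas=(0.0, 0.3, 0.6, 0.9), n_iter=200):
        # Accelerated proximal gradient iteration in which the extrapolation
        # parameter beta is chosen each step by objective comparison over a
        # small grid (a stand-in for a line search on the extrapolation).
        def objective(x):
            r = A @ x - b
            return 0.5 * r @ r + lam * np.abs(x).sum()

        x_prev = np.zeros(A.shape[1])
        x = x_prev.copy()
        for _ in range(n_iter):
            best_x, best_val = x, np.inf
            for beta in betas:
                y = x + beta * (x - x_prev)                    # extrapolate
                z = y - step * (A.T @ (A @ y - b))             # gradient step
                cand = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox
                val = objective(cand)
                if val < best_val:
                    best_x, best_val = cand, val
            x_prev, x = x, best_x
        return x

If the grid is replaced at each iteration by the usual FISTA momentum value, the sketch reduces to plain FISTA.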
Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed, and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, in both the exact and inexact settings, in the case when the objective function is strongly convex. We also investigate a practical variant of...
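For orientation, the generic proximal quasi-Newton step for a composite objective f(x) + λ‖x‖₁ replaces the Euclidean proximal step with a proximal step in the metric of a Hessian approximation H_k, i.e. x_{k+1} ≈ argmin_z ∇f(x_k)ᵀ(z − x_k) + ½(z − x_k)ᵀH_k(z − x_k) + λ‖z‖₁. The sketch below solves this scaled subproblem inexactly by cyclic coordinate descent; it is a generic illustration under assumed names, not the specific algorithm of [19] or its analysis.

    import numpy as np

    def scaled_prox_subproblem(x, grad, H, lam, sweeps=10):
        # Inexactly solve  min_z  grad.(z - x) + 0.5*(z - x)' H (z - x) + lam*||z||_1
        # by cyclic coordinate descent, a common choice of inner solver in
        # proximal quasi-Newton methods.
        z = x.copy()
        for _ in range(sweeps):
            for i in range(len(x)):
                d = z - x
                # Partial gradient of the quadratic model w.r.t. z_i, excluding
                # the contribution of z_i's own quadratic term.
                b_i = grad[i] + H[i] @ d - H[i, i] * d[i]
                t = x[i] - b_i / H[i, i]
                z[i] = np.sign(t) * max(abs(t) - lam / H[i, i], 0.0)
        return z

    # One illustrative outer step with a toy positive-definite Hessian approximation.
    rng = np.random.default_rng(1)
    n = 5
    x_k, grad_k = rng.standard_normal(n), rng.standard_normal(n)
    H_k = np.eye(n) + np.diag(rng.random(n))
    x_next = scaled_prox_subproblem(x_k, grad_k, H_k, lam=0.1)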
Proximal Quasi-Newton Methods for Convex Optimization
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed, and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, in both the exact and inexact settings, in the case when the objective function is strongly convex. We also investigate a practical variant of t...
Proximal quasi-Newton methods for nondifferentiable convex optimization
This paper proposes an implementable proximal quasi-Newton method for minimizing a nondifferentiable convex function f in ℝⁿ. The method is based on Rockafellar’s proximal point algorithm and a cutting-plane technique. At each step, we use an approximate proximal point p(x_k) of x_k to define a v_k ∈ ∂_{ε_k} f(p(x_k)) with ε_k ≤ α‖v_k‖, where α is a constant. The method monitors the reduction in the value ...
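One fact this construction rests on can be shown in a few lines: if p = argmin_z f(z) + (1/2c)‖z − x‖², then the optimality condition gives v = (x − p)/c ∈ ∂f(p), and v = 0 certifies that p minimizes f, so ‖v_k‖ is a natural quantity to monitor. The sketch below uses f = ‖·‖₁, whose proximal point is available in closed form, so none of the approximate-proximal-point or cutting-plane machinery of the method above is needed; it only illustrates the v_k construction and the stopping test, with all names assumed.

    import numpy as np

    def proximal_point_l1(x0, c=1.0, tol=1e-8, max_iter=100):
        # Proximal point iteration for the nondifferentiable convex f(x) = ||x||_1.
        # The proximal point p_k = prox_{c f}(x_k) is componentwise soft-thresholding
        # by c, and the subproblem's optimality condition yields
        # v_k = (x_k - p_k)/c, a subgradient of f at p_k; v_k = 0 certifies that
        # p_k minimizes f, so ||v_k|| serves as the stopping test.
        x = np.asarray(x0, dtype=float)
        p = x.copy()
        for _ in range(max_iter):
            p = np.sign(x) * np.maximum(np.abs(x) - c, 0.0)   # exact proximal point
            v = (x - p) / c                                   # subgradient of f at p
            if np.linalg.norm(v) <= tol:
                break
            x = p
        return p

    print(proximal_point_l1(np.array([3.0, -1.5, 0.2])))      # reaches the minimizer 0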
Proximal Quasi-Newton for Computationally Intensive L1-regularized M-estimators
We consider the class of optimization problems arising from computationally intensive ℓ1-regularized M-estimators, where the function or gradient values are very expensive to compute. A particular instance of interest is the ℓ1-regularized MLE for learning Conditional Random Fields (CRFs), which are a popular class of statistical models for varied structured prediction problems such as sequenc...
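To fix ideas, an ℓ1-regularized M-estimator has the form min_w (1/n) Σᵢ ℓ(w; zᵢ) + λ‖w‖₁. The sketch below writes this out for logistic regression and counts the loss/gradient evaluations, since the premise of the excerpt is that each such evaluation is expensive and a good method should need few of them. The CRF application itself is not reproduced, and the data, the solver (plain proximal gradient), and all names are illustrative assumptions.

    import numpy as np

    def logistic_loss_and_grad(w, X, y, counter):
        # Smooth part of an l1-regularized M-estimator: average logistic loss.
        # 'counter' tracks how many (expensive) loss/gradient evaluations occur.
        counter[0] += 1
        z = X @ w
        loss = np.mean(np.logaddexp(0.0, -y * z))              # log(1 + exp(-y*z)), stably
        grad = X.T @ (-y / (1.0 + np.exp(y * z))) / len(y)
        return loss, grad

    # Illustrative synthetic data and a few proximal gradient steps.
    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 50))
    y = np.sign(X[:, 0] + 0.5 * rng.standard_normal(200))
    w, lam, step, counter = np.zeros(50), 0.01, 0.1, [0]
    for _ in range(100):
        _, g = logistic_loss_and_grad(w, X, y, counter)
        z = w - step * g
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding prox
    print("expensive gradient evaluations used:", counter[0])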